1,896 research outputs found
STEERER: Resolving Scale Variations for Counting and Localization via Selective Inheritance Learning
Scale variation is a deep-rooted problem in object counting, which has not
been effectively addressed by existing scale-aware algorithms. An important
factor is that they typically involve cooperative learning across
multi-resolutions, which could be suboptimal for learning the most
discriminative features from each scale. In this paper, we propose a novel
method termed STEERER (\textbf{S}elec\textbf{T}iv\textbf{E}
inh\textbf{ER}itance l\textbf{E}a\textbf{R}ning) that addresses the issue of
scale variations in object counting. STEERER selects the most suitable scale
for patch objects to boost feature extraction and only inherits discriminative
features from lower to higher resolution progressively. The main insights of
STEERER are a dedicated Feature Selection and Inheritance Adaptor (FSIA), which
selectively forwards scale-customized features at each scale, and a Masked
Selection and Inheritance Loss (MSIL) that helps to achieve high-quality
density maps across all scales. Our experimental results on nine datasets with
counting and localization tasks demonstrate the unprecedented scale
generalization ability of STEERER. Code is available at
\url{https://github.com/taohan10200/STEERER}.
Comment: Accepted by ICCV 2023, 9 pages
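The core idea of inheriting only discriminative features from lower to higher resolution can be sketched with a mask-gated blend. This is a minimal illustration, not STEERER's actual FSIA module: the mask here is a given array standing in for the learned selection, and `selective_inherit` is a hypothetical name.

```python
import numpy as np

def selective_inherit(feat_lo, feat_hi, mask_hi):
    """Blend features across scales, keeping the better scale per location.

    feat_lo : (C, H//2, W//2) lower-resolution feature map
    feat_hi : (C, H, W) higher-resolution feature map
    mask_hi : (H, W) array in [0, 1]; 1 where the higher resolution is the
              more suitable scale (a stand-in for a learned selection mask)
    """
    # nearest-neighbor upsample of the low-resolution features to (C, H, W)
    up = feat_lo.repeat(2, axis=1).repeat(2, axis=2)
    # forward high-resolution features where selected, inherit low-res elsewhere
    return mask_hi * feat_hi + (1.0 - mask_hi) * up
```

With an all-ones mask the output is exactly the high-resolution features; with an all-zeros mask it is the upsampled low-resolution features, so intermediate masks interpolate per pixel between the two scales.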
BeeFlow: Behavior Tree-based Serverless Workflow Modeling and Scheduling for Resource-Constrained Edge Clusters
Serverless computing has gained popularity in edge computing due to its
flexible features, including the pay-per-use pricing model, auto-scaling
capabilities, and multi-tenancy support. Complex Serverless-based applications
typically rely on Serverless workflows (also known as Serverless function
orchestration) to express task execution logic, and numerous application- and
system-level optimization techniques have been developed for Serverless
workflow scheduling. However, there has been limited exploration of optimizing
Serverless workflow scheduling in edge computing systems, particularly in
high-density, resource-constrained environments such as system-on-chip clusters
and single-board-computer clusters. In this work, we discover that existing
Serverless workflow scheduling techniques typically assume models with limited
expressiveness and cause significant resource contention. To address these
issues, we propose modeling Serverless workflows using behavior trees, a novel
and fundamentally different approach from existing directed-acyclic-graph- and
state machine-based models. Behavior tree-based modeling allows for easy
analysis without compromising workflow expressiveness. We further present
observations derived from the inherent tree structure of behavior trees for
contention-free function collections and awareness of exact and empirical
concurrent function invocations. Based on these observations, we introduce
BeeFlow, a behavior tree-based Serverless workflow system tailored for
resource-constrained edge clusters. Experimental results demonstrate that
BeeFlow achieves up to 3.2X speedup in a high-density, resource-constrained
edge testbed and 2.5X speedup in a high-profile cloud testbed, compared with
the state-of-the-art.
Comment: Accepted by Journal of Systems Architecture
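Behavior trees compose leaf tasks with two classic control nodes: a Sequence runs children in order and fails on the first failure, while a Fallback tries children in order and succeeds on the first success. The sketch below shows how a serverless workflow could be modeled this way; the node semantics are standard behavior-tree semantics, but the function names and the workflow itself are hypothetical, not BeeFlow's API.

```python
SUCCESS, FAILURE = "success", "failure"

class Action:
    """Leaf node wrapping a (hypothetical) serverless function invocation."""
    def __init__(self, name, fn, log):
        self.name, self.fn, self.log = name, fn, log
    def tick(self):
        self.log.append(self.name)
        return SUCCESS if self.fn() else FAILURE

class Sequence:
    """Run children left to right; fail as soon as one child fails."""
    def __init__(self, *children):
        self.children = children
    def tick(self):
        for c in self.children:
            if c.tick() == FAILURE:
                return FAILURE
        return SUCCESS

class Fallback:
    """Try children left to right; succeed as soon as one child succeeds."""
    def __init__(self, *children):
        self.children = children
    def tick(self):
        for c in self.children:
            if c.tick() == SUCCESS:
                return SUCCESS
        return FAILURE

log = []
# Hypothetical image pipeline: resize, then infer on GPU with a CPU fallback,
# then store the result.
workflow = Sequence(
    Action("resize", lambda: True, log),
    Fallback(Action("gpu_infer", lambda: False, log),  # e.g. GPU pool busy
             Action("cpu_infer", lambda: True, log)),
    Action("store", lambda: True, log),
)
```

Because the tree structure makes control flow explicit, properties such as which leaves can never run concurrently can be read off the node types, which is the kind of analysis the abstract alludes to.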
Revisiting Panel Data Discrete Choice Models with Lagged Dependent Variables
This paper revisits the identification and estimation of a class of
semiparametric (distribution-free) panel data binary choice models with lagged
dependent variables, exogenous covariates, and entity fixed effects. Using an
"identification at infinity" argument, we show that the model is point
identified in the presence of a free-varying continuous covariate. In contrast
with the celebrated Honore and Kyriazidou (2000), our method permits time
trends of any form and does not suffer from the "curse of dimensionality". We
propose an easily implementable conditional maximum score estimator. The
asymptotic properties of the proposed estimator are fully characterized. A
small-scale Monte Carlo study demonstrates that our approach performs
satisfactorily in finite samples. We illustrate the usefulness of our method by
presenting an empirical application to enrollment in private hospital insurance
using the HILDA survey data.
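To give a feel for maximum score estimation, here is the classic cross-sectional (Manski-style) version, not the paper's conditional variant for dynamic panels: the estimator maximizes the fraction of correctly signed index predictions, and since only the direction of the coefficient vector is identified, the search runs over the unit circle. All data here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 2))
beta_true = np.array([2.0, -1.0])
beta_true = beta_true / np.linalg.norm(beta_true)
e = 0.5 * rng.logistic(size=n)          # any median-zero error distribution works
y = (X @ beta_true + e >= 0).astype(int)

def score(beta):
    # sample maximum score objective: +1 for each correctly signed index
    return np.mean((2 * y - 1) * np.sign(X @ beta))

# scale is not identified, so grid-search directions on the unit circle
angles = np.linspace(0.0, 2.0 * np.pi, 720, endpoint=False)
candidates = np.column_stack([np.cos(angles), np.sin(angles)])
beta_hat = candidates[np.argmax([score(b) for b in candidates])]
```

The objective is a step function of the coefficients, which is why maximum score estimators need no distributional assumption on the errors but converge more slowly than smooth likelihood-based estimators.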
Correcting soft errors online in fast Fourier transform
While many algorithm-based fault tolerance (ABFT) schemes have been proposed to detect soft errors offline in the fast Fourier transform (FFT) after computation finishes, none of the existing ABFT schemes detect soft errors online before the computation finishes. This paper presents an online ABFT scheme for FFT so that soft errors can be detected online and the corrupted computation can be terminated in a much more timely manner. We also extend our scheme to tolerate both arithmetic errors and memory errors, develop strategies to reduce its fault tolerance overhead and improve its numerical stability and fault coverage, and finally incorporate it into the widely used FFTW library - one of today's fastest FFT software implementations. Experimental results demonstrate that: (1) the proposed online ABFT scheme introduces much lower overhead than the existing offline ABFT schemes; (2) it detects errors in a much more timely manner; and (3) it also has higher numerical stability and better fault coverage.
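The flavor of ABFT checking can be shown with the simplest DFT invariant: the zeroth output bin equals the sum of the inputs. A real ABFT scheme uses weighted checksums so that corruption in any bin is caught; the single unweighted checksum below only guards that one relation and is purely illustrative, not the paper's online scheme.

```python
import numpy as np

def fft_checksum_ok(x, X, tol=1e-8):
    """Cheap consistency test using the DFT identity X[0] == sum(x).

    Returns False when the relation is violated beyond floating-point
    tolerance, signaling a (simulated) soft error in the output.
    """
    scale = max(1.0, np.abs(x).sum())      # scale tolerance to the data
    return abs(X[0] - x.sum()) <= tol * scale

x = np.random.default_rng(1).normal(size=64)
X = np.fft.fft(x)                          # clean transform

X_bad = X.copy()
X_bad[0] += 1.0                            # simulate a bit-flip in one bin
```

The clean transform passes the check while the corrupted copy fails it; an online scheme, as the abstract describes, would evaluate such invariants between butterfly stages rather than only after the full transform.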
Stimulative Training of Residual Networks: A Social Psychology Perspective of Loafing
Residual networks have shown great success and become indispensable in
today's deep models. In this work, we aim to re-investigate the training
process of residual networks from a novel social psychology perspective of
loafing, and further propose a new training strategy to strengthen the
performance of residual networks. As residual networks can be viewed as
ensembles of relatively shallow networks (i.e., \textit{unraveled view}) in
prior works, we also start from this view and consider that the final
performance of a residual network is co-determined by a group of sub-networks.
Inspired by the social loafing problem of social psychology, we find that
residual networks invariably suffer from a similar problem, where sub-networks in
a residual network are prone to exert less effort when working as part of the
group compared to working alone. We define this previously overlooked problem
as \textit{network loafing}. As social loafing ultimately causes low
individual productivity and reduced overall performance, network loafing
will also hinder the performance of a given residual network and its
sub-networks. Referring to the solutions of social psychology, we propose
\textit{stimulative training}, which randomly samples a residual sub-network
and calculates the KL-divergence loss between the sampled sub-network and the
given residual network, to act as extra supervision for sub-networks and make
the overall goal consistent. Comprehensive empirical results and theoretical
analyses verify that stimulative training can well handle the loafing problem,
and improve the performance of a residual network by improving the performance
of its sub-networks. The code is available at
https://github.com/Sunshine-Ye/NIPS22-ST.
Comment: NIPS 2022 accepted
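The extra supervision term can be sketched as a KL divergence between the predictive distributions of the full network and a sampled sub-network. This is a minimal numpy illustration; how the sub-network is sampled (e.g., dropping residual branches) is framework-specific, and the KL direction used here is an assumption, so check the paper's code for the exact formulation.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the last (class) axis
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl_div(p, q, eps=1e-12):
    # KL(p || q), computed row-wise over the class axis
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)

def stimulative_loss(logits_full, logits_sub):
    """Extra supervision for a randomly sampled sub-network: penalize the
    divergence of its distribution from the full residual network's.
    (Assumed direction: KL(full || sub).)"""
    return kl_div(softmax(logits_full), softmax(logits_sub)).mean()
```

The loss is zero when the sub-network already matches the full network and grows as their predictions diverge, which is what pushes each sampled sub-network toward the group's overall goal.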